80 research outputs found
Hybrid Whale-Mud-Ring Optimization for Precise Color Skin Cancer Image Segmentation
Timely identification and treatment of rapidly progressing skin cancers can
significantly contribute to the preservation of patients' health and
well-being. Dermoscopy, a dependable and accessible tool, plays a pivotal role
in the initial stages of skin cancer detection. Consequently, the effective
processing of digital dermoscopy images holds significant importance in
elevating the accuracy of skin cancer diagnoses. Multilevel thresholding is a
key tool in medical imaging that extracts objects within the image to
facilitate its analysis. In this paper, an enhanced version of the Mud Ring
Algorithm hybridized with the Whale Optimization Algorithm, named WMRA, is
proposed. The proposed approach utilizes the bubble-net attack and mud ring
strategies to escape stagnation in local optima and obtain optimal thresholds.
The experimental results show that WMRA outperforms a set of recent methods in
terms of fitness, Peak Signal to Noise Ratio (PSNR), and Mean Square Error
(MSE).
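The search above is scored with PSNR and MSE between the original and thresholded images. As a minimal illustration of those two metrics (not of the WMRA metaheuristic itself), the Python sketch below computes them for a threshold-quantized image; `apply_thresholds`, which maps each pixel to its class midpoint, is an illustrative assumption.

```python
import math

def mse(original, segmented):
    """Mean Square Error between two equally sized grayscale images (flat lists)."""
    return sum((o - s) ** 2 for o, s in zip(original, segmented)) / len(original)

def psnr(original, segmented, max_val=255.0):
    """Peak Signal to Noise Ratio in dB; higher means the thresholded
    image is closer to the original."""
    m = mse(original, segmented)
    if m == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / m)

def apply_thresholds(image, thresholds):
    """Quantize pixels: map each pixel to the midpoint of its threshold class."""
    bounds = [0] + sorted(thresholds) + [256]
    levels = [(lo + hi - 1) / 2 for lo, hi in zip(bounds[:-1], bounds[1:])]
    def level(p):
        for i in range(len(thresholds)):
            if p < bounds[i + 1]:
                return levels[i]
        return levels[-1]
    return [level(p) for p in image]
```

A multilevel-thresholding optimizer such as WMRA would search the `thresholds` vector so that such a fitness score is maximized.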
Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review
Globally, the external Internet is increasingly being connected to the
contemporary industrial control system. As a result, there is an immediate need
to protect the network from several threats. The key infrastructure of
industrial activity may be protected from harm by using an intrusion detection
system (IDS), a preventive measure mechanism, to recognize new kinds of
dangerous threats and hostile activities. The most recent artificial
intelligence (AI) techniques used to create IDS in many kinds of industrial
control networks are examined in this study, with a particular emphasis on
IDS-based deep transfer learning (DTL). The latter can be seen as a type of
information fusion that merges and/or adapts knowledge from multiple domains to
enhance the performance of the target task, particularly when labeled data in
the target domain are scarce. Publications issued after 2015 were taken into
account and divided into three categories: DTL-only and IDS-only works inform
the introduction and background, while DTL-based IDS papers constitute the core
of this review.
Researchers will be able to have a better grasp of the current state of DTL
approaches used in IDS in many different types of networks by reading this
review paper. Other useful information is also covered, such as the datasets
used, the type of DTL employed, the pre-trained networks, IDS techniques, the
evaluation metrics (including accuracy/F-score and false alarm rate (FAR)), and
the improvement gained. The algorithms and methods used in several studies,
which clearly illustrate the principle behind each DTL-based IDS subcategory,
are also presented to the reader.
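Since the review repeatedly refers to accuracy, F-score, and false alarm rate (FAR) as IDS evaluation metrics, a small sketch of how they derive from confusion-matrix counts may help; treating "positive" as an attack sample is an assumption of this example.

```python
def ids_metrics(tp, fp, tn, fn):
    """Common IDS evaluation metrics from confusion-matrix counts,
    where 'positive' means an attack/intrusion sample."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0   # detection rate
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    far = fp / (fp + tn) if (fp + tn) else 0.0      # false alarm rate
    return {"accuracy": accuracy, "f_score": f_score, "far": far}
```

For example, 90 detected attacks, 5 false alarms, 95 correctly passed benign flows, and 10 missed attacks give an accuracy of 0.925 and a FAR of 0.05.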
On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as load
disaggregation algorithms, are fundamental tools for effective energy
management. Despite the success of deep models in load disaggregation, they
face various challenges, particularly those pertaining to privacy and security.
This paper investigates the sensitivity of prominent deep NILM baselines to
adversarial attacks, which have proven to be a significant threat in domains
such as computer vision and speech recognition. Adversarial attacks entail the
introduction of imperceptible noise into the input data with the aim of
misleading the neural network into generating erroneous outputs. We investigate
the Fast Gradient Sign Method (FGSM), a well-known adversarial attack, to
perturb the input sequences fed into two commonly employed CNN-based NILM
baselines: the Sequence-to-Sequence (S2S) and Sequence-to-Point (S2P) models.
Our findings provide compelling evidence for the vulnerability of these models,
particularly the S2P model, which exhibits an average decline of 20% in
F1-score even with small amounts of noise. Such weakness has the potential to
generate profound implications for energy management systems in residential and
industrial sectors reliant on NILM models.
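FGSM, as described above, adds noise in the direction of the sign of the loss gradient with respect to the input. The sketch below illustrates the idea on a toy linear regressor with an analytic gradient, not on the actual S2S/S2P CNNs; the model and numbers are hypothetical.

```python
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, w, y_true, eps):
    """Fast Gradient Sign Method on a toy linear regressor y_hat = w . x.
    Loss L = (w.x - y)^2, so dL/dx = 2 * (w.x - y) * w; each input element
    is nudged by eps in the direction that increases the loss."""
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    err = 2.0 * (y_hat - y_true)
    return [xi + eps * sign(err * wi) for wi, xi in zip(w, x)]
```

With `w = [1.0, -1.0]`, `x = [2.0, 1.0]`, and a true output of 0, an epsilon of 0.1 pushes the prediction error up, mirroring how imperceptible perturbations to an aggregate load sequence can degrade a NILM model's output.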
Appliance identification using a histogram post-processing of 2D local binary patterns for smart grid applications
Identifying domestic appliances in the smart grid leads to better power usage
management and further helps in detecting appliance-level abnormalities.
An efficient identification can be achieved only if a robust feature extraction
scheme is developed with a high ability to discriminate between different
appliances on the smart grid. Accordingly, we propose in this paper a novel
method to extract electrical power signatures after transforming the power
signal to 2D space, which offers more encoding possibilities. Next, an improved
local binary pattern (LBP) descriptor is proposed, strengthening the
discriminative ability of conventional LBP through a post-processing stage. A
binarized eigenvalue map (BEVM) is extracted from the 2D power matrix and then
used to post-process the generated LBP representation. Next, two histograms are
constructed, namely up and down histograms, and are then concatenated to form
the global histogram. A comprehensive performance evaluation is performed on
two different datasets, namely GREEND and WHITED, in which power data were
collected at 1 Hz and 44000 Hz sampling rates, respectively. The obtained
results revealed the superiority of the proposed LBP-BEVM based system in terms
of the identification performance versus other 2D descriptors and existing
identification frameworks.
Comment: 8 pages, 10 figures and 5 tables
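Conventional LBP, the starting point of the proposed LBP-BEVM descriptor, can be sketched as follows: each interior pixel is compared with its eight neighbours to form an 8-bit code, and the codes are accumulated into a histogram. This is a generic LBP illustration, not the paper's BEVM post-processing.

```python
def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels of a
    2D grayscale image given as a list of equal-length rows."""
    # neighbour offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offs):
                if img[r + dr][c + dc] >= center:   # neighbour >= center sets the bit
                    code |= 1 << bit
            codes.append(code)
    return codes

def histogram(codes, bins=256):
    """Accumulate LBP codes into a fixed-size histogram feature vector."""
    hist = [0] * bins
    for c in codes:
        hist[c] += 1
    return hist
```

In the paper's pipeline, the 2D power matrix would play the role of `img`, and the resulting histogram (further split into up/down parts via the BEVM) serves as the appliance signature.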
Artificial Intelligence based Anomaly Detection of Energy Consumption in Buildings: A Review, Current Trends and New Perspectives
Enormous amounts of data are being produced everyday by sub-meters and smart
sensors installed in residential buildings. If leveraged properly, that data
could assist end-users, energy producers and utility companies in detecting
anomalous power consumption and understanding the causes of each anomaly.
Therefore, anomaly detection could stop a minor problem from becoming overwhelming.
Moreover, it will aid in better decision-making to reduce wasted energy and
promote sustainable and energy efficient behavior. In this regard, this paper
is an in-depth review of existing anomaly detection frameworks for building
energy consumption based on artificial intelligence. Specifically, an extensive
survey is presented, in which a comprehensive taxonomy is introduced to
classify existing algorithms based on different modules and parameters adopted,
such as machine learning algorithms, feature extraction approaches, anomaly
detection levels, computing platforms and application scenarios. To the best of
the authors' knowledge, this is the first review article that discusses anomaly
detection in building energy consumption. Moving forward, important findings
along with domain-specific problems, difficulties and challenges that remain
unresolved are thoroughly discussed, including the absence of: (i) precise
definitions of anomalous power consumption, (ii) annotated datasets, (iii)
unified metrics to assess the performance of existing solutions, (iv) platforms
for reproducibility and (v) privacy preservation. Next, insights into
current research trends are discussed to widen the applications and
effectiveness of the anomaly detection technology before deriving future
directions attracting significant attention. This article serves as a
comprehensive reference to understand the current technological progress in
anomaly detection of energy consumption based on artificial intelligence.
Comment: 11 figures, 3 tables
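As a concrete baseline for the task this review surveys, a simple statistical detector can flag consumption readings that deviate strongly from the mean. This z-score rule is one of the most basic approaches covered by such taxonomies; the threshold of 3 standard deviations is an illustrative convention, not a value from the paper.

```python
import statistics

def zscore_anomalies(readings, threshold=3.0):
    """Flag indices of power readings whose z-score exceeds the threshold;
    a common statistical baseline for anomalous-consumption detection."""
    mu = statistics.fmean(readings)
    sigma = statistics.pstdev(readings)
    if sigma == 0:
        return []   # constant signal: nothing can be anomalous
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]
```

Real frameworks replace this with learned models and operate at different detection levels (appliance, household, building), as the taxonomy in the paper describes.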
Novel Area-Efficient and Flexible Architectures for Optimal Ate Pairing on FPGA
While FPGA is a suitable platform for implementing cryptographic algorithms,
there are several challenges associated with implementing Optimal Ate pairing
on FPGA, such as security, limited computing resources, and high power
consumption. To overcome these issues, this study introduces three approaches
that can execute the optimal Ate pairing on Barreto-Naehrig curves using
Jacobian coordinates with the goal of reaching 128-bit security on the Genesys
board. The first approach is a pure software implementation utilizing the
MicroBlaze processor. The second involves a combination of software and
hardware, with key finite-field operations transformed into IP cores for the
MicroBlaze. The third approach builds on the second by
incorporating parallelism to improve the pairing process. The utilization of
multiple MicroBlaze processors within a single system offers both versatility
and parallelism to speed up pairing calculations. A variety of methods and
parameters are used to optimize the pairing computation, including Montgomery
modular multiplication, the Karatsuba method, Jacobian coordinates, the complex
squaring method, sparse multiplication, squaring in the extension field, and
the addition chain method. The proposed systems are designed to efficiently
utilize limited resources in restricted environments, while still completing
tasks in a timely manner.
Comment: 13 pages, 8 figures, and 5 tables
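Among the optimization techniques listed, the Karatsuba method is easy to sketch: it trades one n-digit multiplication for three half-size ones. The recursive version below illustrates the principle on ordinary non-negative integers; the hardware designs in the paper apply it to fixed-width field elements.

```python
def karatsuba(x, y):
    """Karatsuba multiplication: one n-digit multiply becomes three
    half-size multiplies, a standard speed-up for the big-integer
    arithmetic underlying pairing computations."""
    if x < 10 or y < 10:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    half = 1 << n
    a, b = divmod(x, half)          # x = a * half + b
    c, d = divmod(y, half)          # y = c * half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd   # = ad + bc, with one multiply
    return (ac << (2 * n)) + (mid << n) + bd
```

The identity (a*half + b)(c*half + d) = ac*half^2 + (ad + bc)*half + bd holds for any split point, which is why the shift amounts need only match `half`.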
Automated liver tissues delineation based on machine learning techniques: A survey, current trends and future orientations
Machine learning and computer vision have grown remarkably in recent years.
Their greatest advantages lie in their automation, suitability, and ability to
generate astounding results in a matter of seconds in a reproducible manner.
This is aided by the ubiquitous advancements in the computing capabilities of
current graphical processing units and the highly efficient implementation of
such techniques. Hence, in this paper, we
survey the key studies that are published between 2014 and 2020, showcasing the
different machine learning algorithms researchers have used to segment the
liver, hepatic-tumors, and hepatic-vasculature structures. We divide the
surveyed studies based on the tissue of interest (hepatic-parenchyma,
hepatic-tumors, or hepatic-vessels), highlighting the studies that tackle more
than one task simultaneously. Additionally, the machine learning algorithms are
classified as either supervised or unsupervised, and further partitioned if the
amount of works that fall under a certain scheme is significant. Moreover,
different datasets and challenges found in literature and websites, containing
masks of the aforementioned tissues, are thoroughly discussed, highlighting the
organizers' original contributions and those of other researchers. Also, the
metrics that are used extensively in the literature are mentioned in our review,
stressing their relevancy to the task at hand. Finally, critical challenges and
future directions are emphasized for innovative researchers to tackle, exposing
gaps that need addressing such as the scarcity of many studies on the vessels
segmentation challenge, and why their absence needs to be dealt with in an
accelerated manner.
Comment: 41 pages, 4 figures, 13 equations, 1 table. A review paper on liver
tissue segmentation based on automated ML-based techniques
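The survey discusses the evaluation metrics common in this literature; the Dice similarity coefficient is the one most frequently reported for liver and tumor masks (naming it here is an assumption, since the abstract does not list specific metrics). A minimal sketch:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2 * |intersection| / (|pred| + |truth|). Returns 1.0 for two empty masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```

A perfect segmentation scores 1.0; disjoint masks score 0.0, which makes the metric easy to interpret across studies.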
Deep Transfer Learning for Automatic Speech Recognition: Towards Better Generalization
Automatic speech recognition (ASR) has recently become an important challenge
for deep learning (DL), as it requires large-scale training datasets and
high computational and storage resources.
high computational and storage resources. Moreover, DL techniques and machine
learning (ML) approaches in general, hypothesize that training and testing data
come from the same domain, with the same input feature space and data
distribution characteristics. This assumption, however, is not applicable in
some real-world artificial intelligence (AI) applications. Moreover, there are
situations where gathering real data is challenging, expensive, or rare, such
that the data requirements of DL models cannot be met. Deep transfer learning
(DTL) has been introduced to overcome these issues; it helps
develop high-performing models using real datasets that are small or slightly
different but related to the training data. This paper presents a comprehensive
survey of DTL-based ASR frameworks to shed light on the latest developments and
helps academics and professionals understand current challenges. Specifically,
after presenting the DTL background, a well-designed taxonomy is adopted to
inform the state-of-the-art. A critical analysis is then conducted to identify
the limitations and advantages of each framework. Moving on, a comparative
study is introduced to highlight the current challenges before deriving
opportunities for future research.
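The core DTL idea the survey describes, reusing source-domain knowledge when target data are scarce, can be caricatured with a one-parameter linear model: fit the small target dataset while regularizing the weight toward the source-task weight. This is a toy parameter-transfer sketch, not an ASR pipeline; all names and values are hypothetical.

```python
def transfer_fit(xs, ys, w_src, lam):
    """Parameter-transfer sketch: fit a 1-D linear model y = w * x to a small
    target dataset while pulling w toward the source-task weight w_src.
    Minimizes sum((w*x - y)^2) + lam * (w - w_src)^2; closed-form solution."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (sxy + lam * w_src) / (sxx + lam)
```

With `lam = 0` this is plain least squares on the target data; as `lam` grows, the fitted weight stays close to the source model, which is exactly the trade-off DTL methods negotiate when target data are limited.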
Cloud Energy Micro-Moment Data Classification: A Platform Study
Energy efficiency is a crucial factor in the well-being of our planet. In
parallel, Machine Learning (ML) plays an instrumental role in automating our
lives and creating convenient workflows for enhancing behavior. So, analyzing
energy behavior can help understand weak points and lay the path towards better
interventions. Moving towards higher performance, cloud platforms can assist
researchers in conducting classification trials that need high computational
power. Under the larger umbrella of the Consumer Engagement Towards Energy
Saving Behavior by means of Exploiting Micro Moments and Mobile Recommendation
Systems (EM)3 framework, we aim to influence consumers' behavioral change by
improving their power consumption consciousness. In this paper, common cloud
artificial intelligence platforms are benchmarked and compared for micro-moment
classification. The Amazon Web Services, Google Cloud Platform, Google Colab,
and Microsoft Azure Machine Learning are employed on simulated and real energy
consumption datasets. The KNN, DNN, and SVM classifiers have been employed.
Strong and relatively similar performance has been observed across the selected
cloud platforms. Yet, the nature of some algorithms limits their training
performance.
Comment: This paper has been accepted in IEEE RTDPCC 2020: International
Symposium on Real-time Data Processing for Cloud Computing
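Of the three classifiers benchmarked (KNN, DNN, SVM), KNN is the simplest to sketch: a query is labeled by the majority class among its k nearest training samples. A generic pure-Python version, not tied to any of the benchmarked cloud platforms:

```python
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Plain k-nearest-neighbours classification: label the query with the
    majority class among its k closest training points (squared Euclidean
    distance, which preserves the nearest-neighbour ordering)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    ranked = sorted(zip(train_x, train_y), key=lambda p: dist(p[0], query))
    votes = Counter(y for _, y in ranked[:k])
    return votes.most_common(1)[0][0]
```

In the micro-moment setting, `train_x` would hold feature vectors derived from consumption readings and `train_y` the micro-moment class labels.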
Deep and transfer learning for building occupancy detection: A review and comparative analysis
The building internet of things (BIoT) is quite a promising concept for curtailing energy consumption, reducing costs, and promoting building transformation. Besides, integrating artificial intelligence (AI) into the BIoT is essential for data analysis and intelligent decision-making. Thus, data-driven approaches to inferring occupancy usage patterns are gaining growing interest in BIoT applications. Typically, analyzing big occupancy data gathered by BIoT networks helps significantly identify the causes of wasted energy and recommend corrective actions. Within this context, building occupancy data aids in the improvement of the efficacy of energy management systems, allowing the reduction of energy consumption while maintaining occupant comfort. Occupancy data might be collected using a variety of devices. Among those devices are optical/thermal cameras, smart meters, environmental sensors such as carbon dioxide (CO2), and passive infrared (PIR). Even though the latter methods are less precise, they have generated considerable attention owing to their inexpensive cost and low invasive nature. This article provides an in-depth survey of the strategies used to analyze sensor data and determine occupancy. The article's primary emphasis is on reviewing deep learning (DL) and transfer learning (TL) approaches for occupancy detection. This work investigates occupancy detection methods to develop an efficient system for processing sensor data while providing accurate occupancy information. Moreover, the paper conducts a comparative study of the readily available algorithms for occupancy detection to determine the optimal method with regard to training time and testing accuracy. The main concerns affecting current occupancy detection systems in terms of privacy and precision were thoroughly discussed.
For occupancy detection, several directions were provided to avoid or reduce privacy problems by employing forthcoming technologies such as edge devices, federated learning, and blockchain-based IoT.
© 2022 The Authors. This paper was made possible by the Graduate Assistantship (GA) program provided by Qatar University (QU). The statements made herein are solely the responsibility of the authors. Open Access funding provided by the Qatar National Library.
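The CO2-based sensing discussed above admits a very simple baseline: smooth the CO2 series with a moving average and flag occupancy when it exceeds a threshold. The window length and the 600 ppm threshold below are illustrative assumptions, not values from the surveyed works; the DL/TL methods the review covers replace this rule with learned models.

```python
def occupancy_from_co2(co2_ppm, window=3, threshold=600.0):
    """Naive occupancy detector: moving-average the CO2 series (ppm) and
    flag occupancy whenever the smoothed value exceeds the threshold."""
    flags = []
    for i in range(len(co2_ppm)):
        lo = max(0, i - window + 1)                     # trailing window
        avg = sum(co2_ppm[lo:i + 1]) / (i + 1 - lo)
        flags.append(avg > threshold)
    return flags
```

The smoothing step matters because CO2 readings are noisy and respond slowly to occupancy changes, which is also why purely threshold-based detectors are less precise than the learned approaches the survey compares.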